165 research outputs found

    El circuito eléctrico de corriente continua

    Slides for Topic 1: the direct-current electric circuit

    Safe human-robot interaction based on dynamic sphere-swept line bounding volumes

    This paper presents a geometric representation for human operators and robotic manipulators that cooperate in the development of flexible tasks. The main goal of this representation is the implementation of real-time proximity queries, which are used by safety strategies to avoid dangerous collisions between humans and robotic manipulators. The representation is composed of a set of bounding volumes based on swept-sphere line primitives, which encapsulate the links more precisely than previous sphere-based models. The radius of each bounding volume not only represents the size of the encapsulated link but also includes an estimation of its motion. The radii of these dynamic bounding volumes are obtained from an algorithm that computes the linear velocity of each link. This algorithm has been implemented as part of a safety strategy in a real human-robot interaction task. This work is funded by the Spanish Ministry of Education and the Spanish Ministry of Science and Innovation through the projects DPI2005-06222 and DPI2008-02647 and the grant AP2005-1458.
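    A minimal sketch of the kind of proximity query such swept-sphere line (capsule) volumes allow, with each radius inflated by a velocity-based motion estimate as the abstract describes. The segment-distance routine is the standard closest-point computation; the function names, the time horizon dt and the inflation rule |v|*dt are assumptions, not the authors' implementation.

```python
# Sketch (assumed, not the authors' code) of a dynamic capsule-capsule
# proximity query: each link is a segment with a radius, and the radius
# is inflated by an estimate of the link's motion, |v| * dt.
import numpy as np

def segment_distance(p1, q1, p2, q2):
    """Minimum distance between segments p1-q1 and p2-q2 (non-degenerate)."""
    d1, d2, r = q1 - p1, q2 - p2, p1 - p2
    a, e, f = d1 @ d1, d2 @ d2, d2 @ r
    b, c = d1 @ d2, d1 @ r
    denom = a * e - b * b
    s = np.clip((b * f - c * e) / denom, 0.0, 1.0) if denom > 1e-12 else 0.0
    t = (b * s + f) / e if e > 1e-12 else 0.0
    if t < 0.0:                                   # clamp t, then recompute s
        t = 0.0
        s = np.clip(-c / a, 0.0, 1.0) if a > 1e-12 else 0.0
    elif t > 1.0:
        t = 1.0
        s = np.clip((b - c) / a, 0.0, 1.0) if a > 1e-12 else 0.0
    return np.linalg.norm((p1 + s * d1) - (p2 + t * d2))

def capsule_clearance(seg_a, r_a, v_a, seg_b, r_b, v_b, dt=0.1):
    """Clearance between two dynamic capsules; <= 0 flags a possible
    collision within the horizon dt, given link velocities v_a, v_b."""
    dist = segment_distance(*seg_a, *seg_b)
    grow_a = r_a + np.linalg.norm(v_a) * dt       # motion-inflated radii
    grow_b = r_b + np.linalg.norm(v_b) * dt
    return dist - grow_a - grow_b
```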

    Targetless Camera-LiDAR Calibration in Unstructured Environments

    Camera-LiDAR sensor fusion plays an important role in autonomous navigation research, and the automatic calibration of these sensors remains a significant challenge in mobile robotics. In this article, we present a novel calibration method that achieves an accurate estimation of the six-degree-of-freedom (6-DOF) rigid-body transformation (i.e., the extrinsic parameters) between the camera and LiDAR sensors. The method consists of a novel co-registration approach that uses local edge features in arbitrary environments to obtain 3D-to-2D errors between the data of both the camera and the LiDAR. Once these 3D-to-2D errors are available, we estimate the relative transform, i.e., the extrinsic parameters, that minimizes them. To find the best transform, we use the perspective-three-point (P3P) algorithm, and to refine the final calibration we use a Kalman filter, which gives the system high stability against noise disturbances. The presented method does not require an artificial target or a structured environment, and it is therefore a target-less calibration. Furthermore, it does not require a dense point cloud, which holds the advantage of not needing scan accumulation. To test our approach, we use the state-of-the-art KITTI dataset, taking the calibration provided by the dataset as the ground truth. In this way, we obtain accurate results and demonstrate the robustness of the system against very noisy observations. This work was supported by the Regional Valencian Community Government and the European Regional Development Fund (ERDF) through the grants ACIF/2019/088 and AICO/2019/020.
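    A minimal sketch, under assumed variable names and not the paper's implementation, of the core step the abstract describes: 3D LiDAR edge points matched to 2D image edge pixels are fed to a P3P solver inside RANSAC, and the resulting extrinsic estimate is scored by its 3D-to-2D reprojection error. Per the abstract, a Kalman filter would further smooth the estimate across frames; that refinement is omitted here.

```python
# Sketch (assumed) of extrinsic estimation from LiDAR-to-image edge matches
# using a P3P minimal solver inside RANSAC, plus the 3D-to-2D error metric.
import numpy as np
import cv2

def estimate_extrinsics(lidar_edge_pts, image_edge_px, K, dist=None):
    """lidar_edge_pts: (N, 3) points, image_edge_px: (N, 2) pixels, K: 3x3."""
    dist = np.zeros(5) if dist is None else dist
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        lidar_edge_pts.astype(np.float32),
        image_edge_px.astype(np.float32),
        K, dist, flags=cv2.SOLVEPNP_P3P)
    if not ok:
        raise RuntimeError("P3P/RANSAC did not find a valid transform")
    # Project the LiDAR points with the estimated extrinsics and measure
    # the mean 3D-to-2D error in pixels.
    proj, _ = cv2.projectPoints(lidar_edge_pts, rvec, tvec, K, dist)
    err = np.linalg.norm(proj.reshape(-1, 2) - image_edge_px, axis=1)
    return rvec, tvec, float(err.mean())
```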

    Tecnologías en la inteligencia ambiental

    This article introduces the term "Ambient Intelligence" (AmI) and describes the different technologies that make its development possible: ubiquitous computing, ubiquitous communication and intelligent interfaces. We enumerate the different localization techniques used in intelligent environments to determine the user's location and thus offer the most suitable services. To better understand the possibilities of AmI, we also present its main application areas. This work has been funded by the Spanish Ministry of Education and Science (MEC) through the project DPI2005-06222 "Diseño, Implementación y Experimentación de Escenarios de Manipulación Inteligentes para Aplicaciones de Ensamblado y Desensamblado Automático" and the FPU postgraduate grant AP2005-1458.

    MOGEDA: Modelo Genérico de Desensamblado Automático

    Product disassembly is the key to the recycling process. This article addresses the modeling of the automatic product disassembly process. It studies both the requirements needed to tackle the process automatically and the tools needed to carry it out: a model-based knowledge base and techniques for three-dimensional object recognition and localization using computer vision. Both the work carried out and the future work are framed within the CICYT project "Sistema Robotizado de Desensamblado Automático basado en Modelos y Visión Artificial" (TAP1999-0436).

    Web-based OERs in Computer Networks

    Learning and teaching processes are continually changing. Consequently, the design of learning technologies has gained interest among educators and educational institutions, from secondary school to higher education. This paper describes the successful use in education of social learning technologies and virtual laboratories designed by the authors, as well as videos developed by the students. These tools, combined with other open educational resources (OERs) within a blended-learning methodology, have been employed to teach the subject of Computer Networks. We have verified not only that the application of OERs to the learning process leads to a significant improvement in the assessments, but also that the combination of several OERs enhances their effectiveness. These results are supported by, firstly, a study of both students' opinions and students' behaviour over five academic years and, secondly, a correlation analysis between the use of OERs and the grades obtained by students.
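    A minimal sketch, with hypothetical data, of the kind of correlation analysis the abstract mentions between OER use and student grades; the study's actual variables and data are not reproduced here.

```python
# Hypothetical example of correlating per-student OER use with final grades.
from scipy.stats import pearsonr

oer_accesses = [12, 40, 7, 55, 23, 31]          # assumed per-student OER use
final_grades = [5.5, 8.0, 4.0, 9.1, 6.8, 7.2]   # assumed grades (0-10 scale)

r, p_value = pearsonr(oer_accesses, final_grades)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")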

    Diseño de una mini-cámara motorizada para el seguimiento de objetos

    Paper presented at the XXIX Jornadas de Automática, Tarragona, 3-5 September 2008. This article presents a miniature vision system based on a motorized wireless CMOS camera that tracks moving targets. The system uses colour features and histograms to detect the object present in the image at a low computational cost. The novelty of this work lies in the flexibility of the designed system, which can be incorporated into applications using mini-robots where low weight and small dimensions are required, without reducing the robots' freedom of movement in sensing systems. This work has been funded by the Spanish Ministry of Education and Science (MEC) through the FPU grant AP2005-1458 and the project DPI2005-06222.
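    A minimal sketch of colour-histogram tracking of the kind the abstract describes (hue-histogram back-projection plus CamShift), not the authors' implementation; the camera index and the initial target window are assumptions.

```python
# Sketch (assumed) of low-cost colour-histogram tracking with CamShift.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                       # hypothetical camera source
ok, frame = cap.read()
x, y, w, h = 200, 150, 80, 80                   # assumed initial target window
roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([roi], [0], None, [180], [0, 180])   # hue histogram
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while ok:
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    box, (x, y, w, h) = cv2.CamShift(backproj, (x, y, w, h), term)
    # box holds the rotated rect ((cx, cy), (bw, bh), angle); its centre
    # could drive the pan/tilt motors to keep the target framed.
    ok, frame = cap.read()
```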

    VISUAL: herramienta para la enseñanza práctica de la visión artificial

    This article presents a practical approach to teaching computer vision in the course "Robots y Sistemas Sensoriales", taught by the Systems Engineering and Automation area at the University of Alicante. First, it describes the VISUAL tool, developed by members of the Automation, Robotics and Computer Vision group, which allows students to specify an image-processing algorithm by means of a graphical scheme built from a set of basic processing modules. The VISUAL tool thus provides an intuitive interface for computer vision while enabling the development of easily understandable algorithms thanks to its scalability and modularity, making it possible to define clearly separated processing stages. Finally, some of the practical experiments proposed and developed with VISUAL are discussed, aimed at the recognition and localization of objects for their subsequent manipulation by a robot.
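    A minimal sketch, not the VISUAL tool itself, of the modular-pipeline idea described above: the processing algorithm is expressed as a chain of small, clearly separated modules that can be reordered or replaced.

```python
# Sketch (assumed) of an image-processing pipeline built from basic modules.
import cv2

def grayscale(img):
    return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

def smooth(img):
    return cv2.GaussianBlur(img, (5, 5), 0)

def binarize(img):
    _, out = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return out

def run_pipeline(image, modules):
    for module in modules:          # each module is an independent stage
        image = module(image)
    return image

# Usage: mask = run_pipeline(cv2.imread("part.png"), [grayscale, smooth, binarize])
```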

    Virtual remote laboratory for teaching of computer vision and robotics in the University of Alicante

    Paper presented at IBCE'04, Second IFAC Workshop on Internet Based Control Education, 5-7 September 2004, Grenoble, France. In this article, we describe the virtual and remote laboratory for computer vision and robotics education at the University of Alicante (Spain). Its aim is to provide all students with access to the available robotic and computer vision equipment, which is generally limited due to its high cost.

    Detection and depth estimation for domestic waste in outdoor environments by sensors fusion

    In this work, we estimate the depth at which domestic waste is located in space from a mobile robot in outdoor scenarios. Since this calculation is performed over a broad range of space (0.3-6.0 m), we use RGB-D camera and LiDAR fusion. With this aim and range, we compare several methods, such as average, nearest, median and centre point, applied to the points that lie inside a reduced or non-reduced bounding box (BB). These BBs are obtained from segmentation and detection methods that are representative of these techniques, such as Yolact, SOLO, You Only Look Once (YOLO)v5, YOLOv6 and YOLOv7. Results show that applying a detection method with the average technique and a 40% reduction of the BB returns the same output as segmenting the object and applying the average method. Moreover, the detection method is faster and lighter than the segmentation one. The median error committed in the conducted experiments was 0.0298 ± 0.0544 m. Comment: This work has been submitted to IFAC WC 2023 for possible publication.
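    A minimal sketch (assumed, not the authors' code) of the reduced-bounding-box depth estimate described above: the detection box is shrunk by 40% around its centre and the depths of the projected points falling inside it are averaged. The interpretation of the 40% reduction (applied to width and height) and the point format are assumptions.

```python
# Sketch (assumed) of averaging depth inside a centre-shrunk detection box.
import numpy as np

def depth_in_reduced_bb(points_uvz, bb, reduction=0.40):
    """points_uvz: (N, 3) array of (u, v, depth) for projected LiDAR/RGB-D
    points; bb: (x_min, y_min, x_max, y_max) detection box in pixels."""
    x_min, y_min, x_max, y_max = bb
    cx, cy = (x_min + x_max) / 2.0, (y_min + y_max) / 2.0
    half_w = (x_max - x_min) * (1.0 - reduction) / 2.0
    half_h = (y_max - y_min) * (1.0 - reduction) / 2.0
    u, v, z = points_uvz[:, 0], points_uvz[:, 1], points_uvz[:, 2]
    inside = (np.abs(u - cx) <= half_w) & (np.abs(v - cy) <= half_h)
    return float(z[inside].mean()) if inside.any() else None   # average method
```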